Recognition of stop consonants in babble noise in normal-hearing individuals
Authors
Abstract
Background and aim: Speech understanding almost never occurs in silence; verbal communication often takes place in environments where multiple speakers are talking, and in such environments babble noise masks speech comprehension. Consonants are more susceptible to noise masking than vowels, yet they carry most of the acoustic information needed to understand the meaning of a word. Because stop consonants have low intensity, they are easily masked by noise, which ultimately impairs speech perception. This study determined the effect of babble noise on the recognition score of stop consonants.
Methods: This cross-sectional study was performed on 48 participants with normal hearing, males and females in equal numbers, aged 19 to 24 years. In addition to auditory and speech evaluation, recognition of stop consonants in consonant-vowel-consonant syllables was tested in the presence of babble noise.
Results: As the noise level increased, the recognition score of syllable-initial stop consonants decreased. There was a significant difference between the recognition scores of word-initial stop consonants and vowels at signal-to-noise ratios of 0, -5, and -10 dB (p = 0.000). In addition, the average recognition score of /b/, /d/, /k/, and /ʔ/ was greater than that of /p/, /t/, /g/, and /q/ (p < 0.0005). Gender had no significant effect.
Conclusions: Increasing the babble noise level significantly reduces the recognition score of stop consonants, and this reduction is greater for some voiced stop consonants as well as some voiceless ones.
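The abstract reports recognition scores at signal-to-noise ratios of 0, -5, and -10 dB but does not describe how the stimuli and the babble masker were combined. The sketch below only illustrates the standard way a noise signal can be scaled to reach a target SNR before mixing; the function name mix_at_snr and the placeholder waveforms are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Scale the babble so the speech-to-noise power ratio equals snr_db,
    then return the mixture. Power is taken as mean squared amplitude."""
    p_speech = np.mean(speech ** 2)
    p_babble = np.mean(babble ** 2)
    # Target noise power from SNR_dB = 10 * log10(p_speech / p_noise)
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    babble_scaled = babble * np.sqrt(target_noise_power / p_babble)
    return speech + babble_scaled

# Example: the three adverse listening conditions mentioned in the abstract.
# The waveforms here are random placeholders, not recorded CVC syllables.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a CVC syllable waveform
babble = rng.standard_normal(16000)   # stand-in for multi-talker babble
for snr in (0, -5, -10):
    mixture = mix_at_snr(speech, babble, snr)
```

At 0 dB the speech and babble have equal power, and each 5 dB decrease roughly triples the noise power relative to the speech, which is consistent with the progressively lower recognition scores the study reports.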
Similar resources
Stimulus factors influencing the identification of voiced stop consonants by normal-hearing and hearing-impaired adults.
The effects of mild-to-moderate hearing impairment on the perceptual importance of three acoustic correlates of stop consonant place of articulation were examined. Normal-hearing and hearing-impaired adults identified a stimulus set comprising all possible combinations of the levels of three factors: formant transition type (three levels), spectral tilt type (three levels), and abruptness of fr...
The relation between speech recognition in noise and the speech-evoked brainstem response in normal-hearing and hearing-impaired individuals
Little is known about the way speech in noise is processed along the auditory pathway. The purpose of this study was to evaluate the relation between listening in noise using the R-Space system and the neurophysiologic response of the speech-evoked auditory brainstem when recorded in quiet and noise in adult participants with mild to moderate hearing loss and normal hearing.
Speech recognition in noise by hearing-impaired and noise-masked normal-hearing listeners.
A prevailing complaint among individuals with sensorineural hearing loss (SNHL) is difficulty understanding speech, particularly under adverse listening conditions. The present investigation compared the speech-recognition abilities of listeners with mild to moderate degrees of SNHL to normal-hearing individuals with simulated hearing impairments, accomplished using spectrally shaped masking no...
The effect of speaking rate and vowel context on the perception of consonants in babble noise
In this paper, we study human perception of consonants in the presence of additive babble noise at two speaking rates. In addition, we work on a model that attempts to replicate these human results through a phoneme recognition model. Consonant-Vowel-Consonant (CVC) stimuli comprising a set of 13 consonants and 3 vowels (/a/, /i/, /u/) were recorded in a sound-proof booth by two talkers at t...
Missing information in spoken word recognition: nonreleased stop consonants.
Cross-modal semantic priming and phoneme monitoring experiments investigated processing of word-final nonreleased stop consonants (e.g., kit may be pronounced /kit/ or /ki/), which are common phonological variants in American English. Both voiced /d/ and voiceless /t/ segments were presented in release and no-release versions. A cross-modal semantic priming task (Experiment 1) showed comparable...
Separation of stop consonants
To extract speech from acoustic interference is a challenging problem. Previous systems based on auditory scene analysis principles deal with voiced speech, but cannot separate unvoiced speech. We propose a novel method to separate stop consonants, which contain significant unvoiced signals, based on their acoustic properties. The method employs onset as the major grouping cue; it first detects...
Journal:
Auditory and Vestibular Research, Volume 24, Issue 1, Pages 31-37